Regret Bound by Variation for Online Convex Optimization

Authors

  • Tianbao Yang
  • Rong Jin
  • Mehrdad Mahdavi
Abstract

In (Hazan and Kale, 2008), the authors showed that the regret of the Follow the Regularized Leader (FTRL) algorithm for online linear optimization can be bounded by the total variation of the cost vectors. In this paper, we extend this result to general online convex optimization. We first analyze the limitations of the FTRL algorithm in (Hazan and Kale, 2008) when applied to online convex optimization, and extend the definition of variation to a sequential variation, which is shown to be a lower bound of the total variation. We then present two novel algorithms that bound the regret by the sequential variation of the cost functions. Unlike previous approaches that maintain a single sequence of solutions, the proposed algorithms maintain two sequences of solutions, which makes it possible to achieve a variation-based regret bound for online convex optimization.
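The setting in the abstract can be made concrete with a minimal sketch, assuming drifting quadratic losses and plain online gradient descent (illustrative only; this is not the paper's two-sequence algorithm, and the step size and loss family are assumptions):

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_gradient_descent(grads, eta=0.1, dim=2):
    """Run projected OGD on a sequence of gradient callbacks; return the iterates."""
    x = np.zeros(dim)
    xs = []
    for grad in grads:
        xs.append(x.copy())
        x = project_ball(x - eta * grad(x))
    return xs

# Quadratic losses f_t(x) = 0.5 * ||x - c_t||^2 with slowly drifting centers c_t.
rng = np.random.default_rng(0)
centers = np.cumsum(0.05 * rng.standard_normal((50, 2)), axis=0)
losses = [lambda x, c=c: 0.5 * np.sum((x - c) ** 2) for c in centers]
grads = [lambda x, c=c: x - c for c in centers]

xs = online_gradient_descent(grads)

# Static regret against a fixed comparator (the projected mean center, which
# minimizes the summed quadratics over the ball when the mean lies inside it).
x_star = project_ball(centers.mean(axis=0))
regret = sum(f(x) for f, x in zip(losses, xs)) - sum(f(x_star) for f in losses)

# A variation-style quantity: distance between consecutive loss functions,
# measured here through their (drifting) minimizers.
variation = np.sum(np.linalg.norm(np.diff(centers, axis=0), axis=1) ** 2)
print(f"regret = {regret:.3f}, variation = {variation:.3f}")
```

When the cost functions barely change between rounds, the variation term is small, which is exactly the regime where variation-based bounds improve on the worst-case O(√T) guarantee.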


Similar papers

Recursive Exponential Weighting for Online Non-convex Optimization

In this paper, we investigate the online non-convex optimization problem, which generalizes the classic online convex optimization problem by relaxing the convexity assumption on the cost function. For this type of problem, the classic exponential weighting online algorithm has recently been shown to attain a sub-linear regret of O(√(T log T)). In this paper, we introduce a novel recursive stru...
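The exponential weighting baseline mentioned above can be sketched in the standard experts setting, assuming losses in [0, 1] and a conventional step-size tuning (the loss matrix and tuning constant are illustrative assumptions):

```python
import numpy as np

def hedge(loss_matrix, eta):
    """Exponential weighting over experts; loss_matrix has shape (T, N), losses in [0, 1]."""
    T, N = loss_matrix.shape
    w = np.ones(N) / N
    total_loss = 0.0
    for losses in loss_matrix:
        total_loss += w @ losses       # expected loss under current weights
        w *= np.exp(-eta * losses)     # multiplicative update
        w /= w.sum()                   # renormalize
    return total_loss

rng = np.random.default_rng(1)
T, N = 1000, 8
loss_matrix = rng.random((T, N))
eta = np.sqrt(2 * np.log(N) / T)       # standard tuning for sub-linear regret

alg_loss = hedge(loss_matrix, eta)
best_expert_loss = loss_matrix.sum(axis=0).min()
regret = alg_loss - best_expert_loss
print(f"regret = {regret:.1f}, bound scale ~ {np.sqrt(2 * T * np.log(N)):.1f}")
```

With this tuning the regret against the best fixed expert stays within O(√(T log N)), sub-linear in T.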


Trading regret for efficiency: online convex optimization with long term constraints

In this paper we propose efficient algorithms for solving constrained online convex optimization problems. Our motivation stems from the observation that most algorithms proposed for online convex optimization require a projection onto the convex set K from which the decisions are made. While the projection is straightforward for simple shapes (e.g., Euclidean ball), for arbitrary complex sets ...
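For a "simple shape" such as the Euclidean ball, the projection mentioned above has a closed form; a minimal sketch:

```python
import numpy as np

def project_onto_ball(x, center=None, radius=1.0):
    """Euclidean projection onto a ball: closed form, O(d) time."""
    if center is None:
        center = np.zeros_like(x)
    d = x - center
    norm = np.linalg.norm(d)
    return x if norm <= radius else center + d * (radius / norm)

y = project_onto_ball(np.array([3.0, 4.0]))   # norm 5 -> rescaled to norm 1
print(y)  # [0.6 0.8]
```

For an arbitrary convex set defined by functional constraints, no such closed form exists and each projection becomes an optimization problem in itself, which is the cost the paper's approach avoids.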


Online Optimization with Gradual Variations

We study the online convex optimization problem, in which an online algorithm has to make repeated decisions with convex loss functions and hopes to achieve a small regret. We consider a natural restriction of this problem in which the loss functions have a small deviation, measured by the sum of the distances between every two consecutive loss functions, according to some distance metrics. We ...
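For linear losses f_t(x) = ⟨g_t, x⟩, one natural instance of this deviation measure is the sum of squared distances between consecutive cost vectors; a sketch under that assumption (the drift model is illustrative):

```python
import numpy as np

# Gradual variation of linear losses f_t(x) = <g_t, x>: the sum of squared
# distances between consecutive cost vectors.
rng = np.random.default_rng(2)
g = np.cumsum(0.01 * rng.standard_normal((100, 3)), axis=0)  # slowly drifting costs

variation = float(np.sum(np.linalg.norm(np.diff(g, axis=0), axis=1) ** 2))
worst_case = float(np.sum(np.linalg.norm(g, axis=1) ** 2))   # grows with T regardless
print(f"variation = {variation:.4f}  vs  worst-case scale = {worst_case:.4f}")
```

When consecutive costs are close, the variation term is far smaller than the worst-case quantity, so a regret bound in terms of variation can be much tighter than a bound in terms of T alone.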


Tracking Slowly Moving Clairvoyant: Optimal Dynamic Regret of Online Learning with True and Noisy Gradient

This work focuses on the dynamic regret of online convex optimization, which compares the performance of online learning to a clairvoyant who knows the sequence of loss functions in advance and hence selects the minimizer of the loss function at each step. By assuming that the clairvoyant moves slowly (i.e., the minimizers change slowly), we present several improved variation-based upper bounds of the...
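Dynamic regret and the path length of the moving minimizers can be illustrated with a small sketch, assuming quadratic per-round losses whose minimizers drift slowly (the losses and step size are illustrative, not the paper's setting):

```python
import numpy as np

# Dynamic regret compares against the per-round minimizer x_t* = argmin f_t;
# here f_t(x) = 0.5 * ||x - c_t||^2, so x_t* = c_t and f_t(x_t*) = 0.
rng = np.random.default_rng(3)
T, d, eta = 200, 2, 0.5
centers = np.cumsum(0.02 * rng.standard_normal((T, d)), axis=0)  # slow clairvoyant

x = np.zeros(d)
dyn_regret, path_length = 0.0, 0.0
prev = centers[0]
for c in centers:
    dyn_regret += 0.5 * np.sum((x - c) ** 2)   # f_t(x_t) - f_t(x_t*), minimum is 0
    path_length += np.linalg.norm(c - prev)    # how far the minimizer moved
    x = x - eta * (x - c)                      # gradient step on f_t
    prev = c
print(f"dynamic regret = {dyn_regret:.3f}, path length = {path_length:.3f}")
```

The slower the minimizers move (small path length), the smaller the achievable dynamic regret, which is the intuition behind tracking a "slowly moving clairvoyant".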


A Low Complexity Algorithm with $O(\sqrt{T})$ Regret and Finite Constraint Violations for Online Convex Optimization with Long Term Constraints

This paper considers online convex optimization over a complicated constraint set, which typically consists of multiple functional constraints and a set constraint. Zinkevich's conventional projection-based online algorithm (Zinkevich 2003) can be difficult to implement due to the potentially high computation complexity of the projection operation. In this paper, we relax the functional con...
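The long-term-constraint idea, trading exact per-round projections for a bound on cumulative violation, can be sketched with a generic primal-dual update (illustrative only; the constraint, losses, and step size are assumptions, not the paper's algorithm):

```python
import numpy as np

# Long-term constraints: instead of enforcing g(x_t) <= 0 every round, only the
# cumulative violation sum_t [g(x_t)]_+ needs to stay small over the horizon.
rng = np.random.default_rng(4)
T, d = 500, 2
eta = 1.0 / np.sqrt(T)

def g(x):                                 # example constraint: ||x||_1 - 1 <= 0
    return np.sum(np.abs(x)) - 1.0

x = np.zeros(d)
lam = 0.0                                 # dual variable for the constraint
violation = 0.0
total_loss = 0.0
for _ in range(T):
    c = rng.standard_normal(d)            # linear loss <c, x> this round
    total_loss += c @ x
    violation += max(g(x), 0.0)
    grad = c + lam * np.sign(x)           # subgradient of <c, x> + lam * g(x)
    x = x - eta * grad                     # primal descent (no projection onto {g <= 0})
    lam = max(lam + eta * g(x), 0.0)      # dual ascent, projected to lam >= 0
print(f"cumulative violation = {violation:.3f}")
```

Each round costs only a subgradient step, avoiding the expensive projection onto the full constraint set; the dual variable grows whenever the constraint is violated and pushes later iterates back toward feasibility.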



Journal:
  • CoRR

Volume: abs/1111.6337

Pages: -

Publication date: 2011